23 research outputs found

    Building machines that learn and think about morality

    Get PDF
Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We also discuss how work in embodied and situated cognition could provide a valuable perspective on future research.

    The Ethics of Automated Vehicles

    Get PDF

    Why Trolley Problems Matter for the Ethics of Automated Vehicles

    Get PDF

    The sensitivity argument against child euthanasia

    Get PDF

    Legal Necessity, Pareto Efficiency & Justified Killing in Autonomous Vehicle Collisions

    Get PDF

    Autonomy, Nudging and Post-Truth Politics

    Get PDF

    A Dilemma for Reasons Additivity

    Get PDF
This paper presents a dilemma for the additive model of reasons. Either the model accommodates disjunctive cases in which one ought to perform some act φ just in case at least one of two factors obtains, or it accommodates conjunctive cases in which one ought to φ just in case both of two factors obtain. The dilemma also arises in a revised additive model that accommodates imprecisely weighted reasons. There exist disjunctive and conjunctive cases. Hence the additive model is extensionally inadequate. The upshot of the dilemma is that one of the most influential accounts of how reasons accrue to determine what we ought to do is flawed.

    On algorithmic fairness in medical practice

    Get PDF
The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness in healthcare. In this paper, we provide the building blocks for an account of algorithmic bias and its normative relevance in medicine.

    Engaging Engineering Teams Through Moral Imagination: A Bottom-Up Approach for Responsible Innovation and Ethical Culture Change in Technology Companies

    Full text link
    We propose a "Moral Imagination" methodology to facilitate a culture of responsible innovation for engineering and product teams in technology companies. Our approach has been operationalized over the past two years at Google, where we have conducted over 50 workshops with teams across the organization. We argue that our approach is a crucial complement to existing formal and informal initiatives for fostering a culture of ethical awareness, deliberation, and decision-making in technology design, such as company principles, ethics and privacy review procedures, and compliance controls. We characterize some of the distinctive benefits of our methodology for the technology sector in particular.
    Comment: 16 pages, 1 figure